Introduction

This notebook presents keypoint detection on the CAT dataset using a CNN (a MobileNetV2-based model trained end-to-end).

Imports

In [1]:
import os
import glob
import numpy as np
import matplotlib.pyplot as plt
import PIL
import PIL.ImageDraw

Limit TensorFlow GPU memory usage

In [2]:
import tensorflow as tf
# Initialize TF without grabbing all GPU memory up front
gpu_options = tf.GPUOptions(allow_growth=True)
config = tf.ConfigProto(gpu_options=gpu_options)
with tf.Session(config=config): pass
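
The cell above uses the TF 1.x session API. As a side note, under TensorFlow 2.x (not used in this notebook, so treat this as an untested sketch) the equivalent memory-growth setting would be:

```python
import tensorflow as tf

# TF 2.x equivalent of allow_growth: enable memory growth per physical GPU
for gpu in tf.config.experimental.list_physical_devices('GPU'):
    tf.config.experimental.set_memory_growth(gpu, True)
```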

Configuration

Point this to the dataset directory; the folder should contain CAT_00, CAT_01, and so on.

In [3]:
dataset_location = '/home/marcin/Datasets/cat-dataset/cats/'

Helpers

In [4]:
def plot_images(indices, images, features):
    """Predict keypoints with the global `model` and draw them on the images."""
    def draw_keypoints(img, keypoints, r=2, c='red'):
        """Draw keypoints on PIL image"""
        draw = PIL.ImageDraw.Draw(img)
        for x, y in keypoints:
            draw.ellipse([x-r, y-r, x+r, y+r], c)
        return img

    _, iw, ih, _ = images.shape
    assert iw == ih

    predictions = model.predict(features[indices])   # uses global `model`
    kps = (predictions * (iw // 2)) + (iw // 2)      # map -1..1 back to pixel coords

    _, axes = plt.subplots(nrows=1, ncols=len(indices), figsize=[12,4])
    if len(indices) == 1: axes = [axes]

    for i, idx in enumerate(indices):
        img = PIL.Image.fromarray(images[idx])
        axes[i].imshow(draw_keypoints(img, kps[i]))
        axes[i].axis('off')

    plt.show()

Load Dataset

In this section we:

  • load images and keypoints from the folder structure
  • resize to 224x224 and save into a numpy .npz file

Subfolders within dataset

In [5]:
folders_all = ['CAT_00', 'CAT_01', 'CAT_02', 'CAT_03', 'CAT_04', 'CAT_05', 'CAT_06']

Get paths to all images

In [6]:
def build_image_files_list(folders):
    image_files_list = []
    for folder in folders:
        wild_path = os.path.join(dataset_location, folder, '*.jpg')
        image_files_list.extend(sorted(glob.glob(wild_path)))
    return image_files_list
In [7]:
image_paths_all = build_image_files_list(folders_all)
In [8]:
print('Nb images:', len(image_paths_all))
image_paths_all[:3]
Nb images: 9997
Out[8]:
['/home/marcin/Datasets/cat-dataset/cats/CAT_00/00000001_000.jpg',
 '/home/marcin/Datasets/cat-dataset/cats/CAT_00/00000001_005.jpg',
 '/home/marcin/Datasets/cat-dataset/cats/CAT_00/00000001_008.jpg']

Helper to load keypoint data from .cat files

In [9]:
def load_keypoints(path):
    """Load keypoints from a .cat annotation file.

    A .cat file is a single line of space-separated values in the format:
    'nb_keypoints x1 y1 x2 y2 ...'
    """
    with open(path, 'r') as f:
        line = f.read().split()  # [nb_keypoints, x1, y1, x2, y2, ...]
    keypoints_nb = int(line[0])
    keypoints_1d = np.array(line[1:], dtype=int)  # [x1, y1, x2, y2, ...]
    keypoints_xy = keypoints_1d.reshape((-1, 2))  # [[x1, y1], [x2, y2], ...]
    assert keypoints_nb == len(keypoints_xy)
    assert keypoints_nb == 9  # always nine: two eyes, mouth, three points per ear
    return keypoints_xy       # np.ndarray of shape (9, 2)
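
As a quick illustration of the format, the same parsing logic can be exercised on a synthetic annotation line (the helper name `parse_cat_line` and the sample values below are made up for this sketch):

```python
import numpy as np

def parse_cat_line(line):
    """Parse one '.cat'-style line: 'nb_keypoints x1 y1 x2 y2 ...'."""
    values = line.split()
    nb = int(values[0])
    xy = np.array(values[1:], dtype=int).reshape(-1, 2)
    assert nb == len(xy)
    return xy

# Hypothetical annotation line with nine keypoints
sample = '9 175 160 239 162 199 199 149 121 137 78 166 93 281 101 312 96 296 133'
kps = parse_cat_line(sample)
print(kps.shape)  # (9, 2)
```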

Helper to draw keypoints on the image

In [10]:
def draw_keypoints(img, keypoints, r=2, c='red'):
    """Draw keypoints on PIL image"""
    draw = PIL.ImageDraw.Draw(img)
    for x, y in keypoints:
        draw.ellipse([x-r, y-r, x+r, y+r], c)
    return img

Open single image and load corresponding keypoints

In [11]:
example_path = image_paths_all[0]
img = PIL.Image.open(example_path)
kps = load_keypoints(example_path+'.cat')

Show example keypoints

In [12]:
display(kps)
array([[175, 160],
       [239, 162],
       [199, 199],
       [149, 121],
       [137,  78],
       [166,  93],
       [281, 101],
       [312,  96],
       [296, 133]])

Show example image

In [13]:
display(draw_keypoints(img.copy(), kps))

Helper to scale image and keypoints

In [14]:
def scale_img_kps(image, keypoints, target_size):
    width, height = image.size
    ratio_w = width / target_size
    ratio_h = height / target_size

    image_new = image.resize((target_size, target_size), resample=PIL.Image.LANCZOS)

    keypoints_new = np.zeros_like(keypoints)
    keypoints_new[:, 0] = keypoints[:, 0] / ratio_w
    keypoints_new[:, 1] = keypoints[:, 1] / ratio_h

    return image_new, keypoints_new

Test it

In [15]:
img2, kps2 = scale_img_kps(img, kps, target_size=224)
display(draw_keypoints(img2.copy(), kps2))
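
Beyond the visual check, the ratio arithmetic can be verified numerically with plain NumPy (a hypothetical 448x336 source size is assumed; corners should map to corners and the center to the center):

```python
import numpy as np

width, height, target = 448, 336, 224
ratio_w, ratio_h = width / target, height / target      # 2.0 and 1.5
kps = np.array([[448, 336], [224, 168], [0, 0]], dtype=float)
kps_scaled = kps / np.array([ratio_w, ratio_h])
print(kps_scaled)  # [[224. 224.] [112. 112.] [  0.   0.]]
```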

Helper to load and transform both input image and keypoints

In [16]:
def load_image_keypoints(image_path, keypoints_path, target_size):
    image = PIL.Image.open(image_path)
    keypoints = load_keypoints(keypoints_path)
    image_new, keypoints_new = scale_img_kps(image, keypoints, target_size)
    return image, keypoints, image_new, keypoints_new

Show couple more examples

In [17]:
idx = 21

image, keypoints, image_new, keypoints_new = load_image_keypoints(
    image_paths_all[idx], image_paths_all[idx]+'.cat', target_size=224)
display(draw_keypoints(image.copy(), keypoints))
display(draw_keypoints(image_new.copy(), keypoints_new))

Preprocess Images

In [18]:
images_list = []
keypoints_list = []
for i, image_path in enumerate(image_paths_all):
    _, _, image_new, keypoints_new = \
        load_image_keypoints(image_path, image_path+'.cat', target_size=224)
    
    image_arr = np.array(image_new)
    # assert image_arr.shape == (224, 224, 3)
    # assert 0 <= image_arr.min() <= 255

    images_list.append(image_arr)
    keypoints_list.append(keypoints_new)
    
    if i % 1000 == 0:
        print('i:', i)
        
images = np.array(images_list)
keypoints = np.array(keypoints_list)
i: 0
i: 1000
i: 2000
i: 3000
i: 4000
i: 5000
i: 6000
i: 7000
i: 8000
i: 9000
In [19]:
print('images.shape:', images.shape)
print('images.dtype:', images.dtype)
print('images.min()', images.min())
print('images.max()', images.max())
images.shape: (9997, 224, 224, 3)
images.dtype: uint8
images.min() 0
images.max() 255

Note that some keypoints fall outside the image (e.g. when a cat's ear is cropped out)

In [20]:
print('keypoints.shape:', keypoints.shape)
print('keypoints.dtype:', keypoints.dtype)
print('keypoints.min()', keypoints.min())
print('keypoints.max()', keypoints.max())
keypoints.shape: (9997, 9, 2)
keypoints.dtype: int64
keypoints.min() -139
keypoints.max() 295

Sanity check

In [21]:
idx = 1

display(draw_keypoints(PIL.Image.fromarray(images[idx]).copy(), keypoints[idx]))

Save Data

In [22]:
dataset_npz = os.path.join(dataset_location, 'cats_224.npz')
print(dataset_npz)
/home/marcin/Datasets/cat-dataset/cats/cats_224.npz
In [23]:
# Uncomment to (re)write the dataset file:
# np.savez(dataset_npz, images=images, keypoints=keypoints)

Preprocess

In this section we:

  • load 224x224 images and keypoints from .npz file
  • apply normalization to images
  • normalize keypoints to roughly the -1..1 range (keypoints outside the image exceed it)
  • pass through model to get bottleneck features (unused in this notebook)
  • save into second .npz file

Dataset file

In [5]:
dataset_npz = os.path.join(dataset_location, 'cats_224.npz')
print(dataset_npz)
/home/marcin/Datasets/cat-dataset/cats/cats_224.npz

Load data

In [6]:
npzfile = np.load(dataset_npz)
images = npzfile['images']
keypoints = npzfile['keypoints']

Preprocess

Convert the input into the format expected by ImageNet-pretrained MobileNetV2: cast to float32 and scale pixel values from 0..255 to -1..1.

In [7]:
features = tf.keras.applications.mobilenet_v2.preprocess_input(images)
print('features.shape:', features.shape)
print('features.dtype:', features.dtype)
print('features.min()', features.min())
print('features.max()', features.max())
features.shape: (9997, 224, 224, 3)
features.dtype: float32
features.min() -1.0
features.max() 1.0
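
For MobileNetV2, `preprocess_input` uses the 'tf' scaling mode, which amounts to `x / 127.5 - 1`. A minimal sketch of the same computation in plain NumPy:

```python
import numpy as np

pixels = np.array([0, 127.5, 255], dtype=np.float32)  # example pixel values
features = pixels / 127.5 - 1.0  # what mobilenet_v2.preprocess_input computes
print(features)  # [-1.  0.  1.]
```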

Convert targets to roughly the -1..1 range (112 is half the 224-pixel image size; off-image keypoints exceed the range, as the min/max below show)

In [8]:
targets = (keypoints - 112) / 112
print('targets.shape:', targets.shape)
print('targets.dtype:', targets.dtype)
print('targets.min()', targets.min())
print('targets.max()', targets.max())
targets.shape: (9997, 9, 2)
targets.dtype: float64
targets.min() -2.2410714285714284
targets.max() 1.6339285714285714
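
The normalization is inverted at plotting time (`kps = predictions * 112 + 112` in `plot_images`, since `iw // 2 == 112`). A small round-trip sketch with made-up coordinates:

```python
import numpy as np

pixels = np.array([[0, 112], [224, -139], [295, 50]], dtype=float)  # example coords
targets = (pixels - 112) / 112   # forward: normalize to roughly [-1, 1]
recovered = targets * 112 + 112  # inverse: back to 224-pixel coordinates
print(targets[0])  # [-1.  0.]
```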

Save Data

In [10]:
processed_npz = os.path.join(dataset_location, 'processed_224.npz')
print(processed_npz)
/home/marcin/Datasets/cat-dataset/cats/processed_224.npz
In [12]:
np.savez(processed_npz, features=features, targets=targets)

Train End-to-End

In [5]:
dataset_npz = os.path.join(dataset_location, 'cats_224.npz')
processed_npz = os.path.join(dataset_location, 'processed_224.npz')
print(dataset_npz)
print(processed_npz)
/home/marcin/Datasets/cat-dataset/cats/cats_224.npz
/home/marcin/Datasets/cat-dataset/cats/processed_224.npz
In [6]:
npzfile = np.load(dataset_npz)
images = npzfile['images']

npzfile = np.load(processed_npz)
features = npzfile['features']
targets = npzfile['targets']

Split into training and validation

In [7]:
split = 8000
train_images = images[:split]
train_features = features[:split]
train_targets = targets[:split]
valid_images = images[split:]
valid_features = features[split:]
valid_targets = targets[split:]
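
Note this split is sequential, so validation consists of the last images in folder order (roughly the last folder or two). If mixing all folders into both sets were desired, a shuffled split would be an alternative (a sketch, not what this notebook does):

```python
import numpy as np

rng = np.random.RandomState(42)  # fixed seed for reproducibility
n_samples, split = 9997, 8000
perm = rng.permutation(n_samples)
train_idx, valid_idx = perm[:split], perm[split:]
print(len(train_idx), len(valid_idx))  # 8000 1997
```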

Define model

In [8]:
X_inputs = tf.keras.layers.Input(shape=(224, 224, 3))

mobilenetv2 = tf.keras.applications.mobilenet_v2.MobileNetV2(
    input_shape=(224, 224, 3), alpha=1.0, include_top=False,
    weights='imagenet', input_tensor=X_inputs, pooling='max')

X = tf.keras.layers.Dense(128, activation='relu')(mobilenetv2.layers[-1].output)
X = tf.keras.layers.Dense(64, activation='relu')(X)
X = tf.keras.layers.Dense(18, activation='linear')(X)
X = tf.keras.layers.Reshape((9, 2))(X)

model = tf.keras.models.Model(inputs=X_inputs, outputs=X)
model.compile(optimizer=tf.keras.optimizers.Adam(), loss='mse')
WARNING:tensorflow:From /home/marcin/.anaconda/envs/tfgpu113/lib/python3.7/site-packages/tensorflow/python/ops/resource_variable_ops.py:435: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
WARNING:tensorflow:From /home/marcin/.anaconda/envs/tfgpu113/lib/python3.7/site-packages/tensorflow/python/keras/utils/losses_utils.py:170: to_float (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.

Custom callback for plotting

In [9]:
class CallbackPlot(tf.keras.callbacks.Callback):
    def on_train_begin(self, logs={}):
        pass

    def on_epoch_end(self, batch, logs={}):
        plot_images([10, 20, 30, 40, 50, 60], train_images, train_features)
        plot_images([10, 20, 30, 40, 50, 60], valid_images, valid_features)

Show some cats before training. The untrained head produces essentially random predictions, so most probably no sensible keypoints will be shown.

In [10]:
plot_images([10, 20, 30, 40, 50, 60], train_images, train_features)
plot_images([10, 20, 30, 40, 50, 60], valid_images, valid_features)

Run training

In [12]:
#
#   Callbacks
#

# tb_logdir = os.path.expanduser('~/logs/')
# tb_counter  = len([log for log in os.listdir(tb_logdir) if 'cats' in log]) + 1
# callback_tb = tf.keras.callbacks.TensorBoard(
#     log_dir=tb_logdir + 'cats' + '_' + str(tb_counter), )

callback_mc = tf.keras.callbacks.ModelCheckpoint(
    'model.h5', save_best_only=True, verbose=1)

callback_lr = tf.keras.callbacks.ReduceLROnPlateau(
    monitor='val_loss', factor=0.2, patience=5, verbose=1)

# callback_plt = CallbackPlot()

#
#   Train
#
hist = model.fit(train_features, train_targets, epochs=50, batch_size=32, shuffle=True,
  validation_data=(valid_features, valid_targets),
  callbacks=[
             #callback_tb,
             callback_mc,
             callback_lr,
             #callback_plt,
            ]
)
Train on 8000 samples, validate on 1997 samples
WARNING:tensorflow:From /home/marcin/.anaconda/envs/tfgpu113/lib/python3.7/site-packages/tensorflow/python/ops/math_ops.py:3066: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
WARNING:tensorflow:From /home/marcin/.anaconda/envs/tfgpu113/lib/python3.7/site-packages/tensorflow/python/ops/math_grad.py:102: div (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Deprecated in favor of operator or tf.math.divide.
Epoch 1/50
7968/8000 [============================>.] - ETA: 0s - loss: 0.2001
Epoch 00001: val_loss improved from inf to 0.19934, saving model to model.h5
8000/8000 [==============================] - 105s 13ms/sample - loss: 0.1995 - val_loss: 0.1993
Epoch 2/50
7968/8000 [============================>.] - ETA: 0s - loss: 0.0590
Epoch 00002: val_loss improved from 0.19934 to 0.09683, saving model to model.h5
8000/8000 [==============================] - 80s 10ms/sample - loss: 0.0590 - val_loss: 0.0968
Epoch 3/50
7968/8000 [============================>.] - ETA: 0s - loss: 0.0452
Epoch 00003: val_loss improved from 0.09683 to 0.06448, saving model to model.h5
8000/8000 [==============================] - 80s 10ms/sample - loss: 0.0452 - val_loss: 0.0645
Epoch 4/50
7968/8000 [============================>.] - ETA: 0s - loss: 0.0247
Epoch 00004: val_loss did not improve from 0.06448
8000/8000 [==============================] - 79s 10ms/sample - loss: 0.0247 - val_loss: 0.0850
Epoch 5/50
7968/8000 [============================>.] - ETA: 0s - loss: 0.0152
Epoch 00005: val_loss improved from 0.06448 to 0.05723, saving model to model.h5
8000/8000 [==============================] - 80s 10ms/sample - loss: 0.0152 - val_loss: 0.0572
Epoch 6/50
7968/8000 [============================>.] - ETA: 0s - loss: 0.0122
Epoch 00006: val_loss improved from 0.05723 to 0.05628, saving model to model.h5
8000/8000 [==============================] - 95s 12ms/sample - loss: 0.0122 - val_loss: 0.0563
Epoch 7/50
7968/8000 [============================>.] - ETA: 0s - loss: 0.0100
Epoch 00007: val_loss improved from 0.05628 to 0.03418, saving model to model.h5
8000/8000 [==============================] - 80s 10ms/sample - loss: 0.0100 - val_loss: 0.0342
Epoch 8/50
7968/8000 [============================>.] - ETA: 0s - loss: 0.0085
Epoch 00008: val_loss improved from 0.03418 to 0.02939, saving model to model.h5
8000/8000 [==============================] - 80s 10ms/sample - loss: 0.0085 - val_loss: 0.0294
Epoch 9/50
7968/8000 [============================>.] - ETA: 0s - loss: 0.0077
Epoch 00009: val_loss improved from 0.02939 to 0.02304, saving model to model.h5
8000/8000 [==============================] - 80s 10ms/sample - loss: 0.0077 - val_loss: 0.0230
Epoch 10/50
7968/8000 [============================>.] - ETA: 0s - loss: 0.0057
Epoch 00010: val_loss improved from 0.02304 to 0.01828, saving model to model.h5
8000/8000 [==============================] - 80s 10ms/sample - loss: 0.0057 - val_loss: 0.0183
Epoch 11/50
7968/8000 [============================>.] - ETA: 0s - loss: 0.0044
Epoch 00011: val_loss improved from 0.01828 to 0.01779, saving model to model.h5
8000/8000 [==============================] - 80s 10ms/sample - loss: 0.0044 - val_loss: 0.0178
Epoch 12/50
7968/8000 [============================>.] - ETA: 0s - loss: 0.0042
Epoch 00012: val_loss improved from 0.01779 to 0.01318, saving model to model.h5
8000/8000 [==============================] - 78s 10ms/sample - loss: 0.0042 - val_loss: 0.0132
Epoch 13/50
7968/8000 [============================>.] - ETA: 0s - loss: 0.0041
Epoch 00013: val_loss did not improve from 0.01318
8000/8000 [==============================] - 78s 10ms/sample - loss: 0.0041 - val_loss: 0.0140
Epoch 14/50
7968/8000 [============================>.] - ETA: 0s - loss: 0.0036
Epoch 00014: val_loss did not improve from 0.01318
8000/8000 [==============================] - 78s 10ms/sample - loss: 0.0036 - val_loss: 0.0188
Epoch 15/50
7968/8000 [============================>.] - ETA: 0s - loss: 0.0033
Epoch 00015: val_loss improved from 0.01318 to 0.01132, saving model to model.h5
8000/8000 [==============================] - 78s 10ms/sample - loss: 0.0033 - val_loss: 0.0113
Epoch 16/50
7968/8000 [============================>.] - ETA: 0s - loss: 0.0032
Epoch 00016: val_loss improved from 0.01132 to 0.01016, saving model to model.h5
8000/8000 [==============================] - 79s 10ms/sample - loss: 0.0032 - val_loss: 0.0102
Epoch 17/50
7968/8000 [============================>.] - ETA: 0s - loss: 0.0039
Epoch 00017: val_loss improved from 0.01016 to 0.01007, saving model to model.h5
8000/8000 [==============================] - 79s 10ms/sample - loss: 0.0039 - val_loss: 0.0101
Epoch 18/50
7968/8000 [============================>.] - ETA: 0s - loss: 0.0039
Epoch 00018: val_loss did not improve from 0.01007
8000/8000 [==============================] - 78s 10ms/sample - loss: 0.0039 - val_loss: 0.0103
Epoch 19/50
7968/8000 [============================>.] - ETA: 0s - loss: 0.0034
Epoch 00019: val_loss did not improve from 0.01007
8000/8000 [==============================] - 78s 10ms/sample - loss: 0.0034 - val_loss: 0.0105
Epoch 20/50
7968/8000 [============================>.] - ETA: 0s - loss: 0.0034
Epoch 00020: val_loss improved from 0.01007 to 0.00966, saving model to model.h5
8000/8000 [==============================] - 79s 10ms/sample - loss: 0.0034 - val_loss: 0.0097
Epoch 21/50
7968/8000 [============================>.] - ETA: 0s - loss: 0.0030
Epoch 00021: val_loss improved from 0.00966 to 0.00877, saving model to model.h5
8000/8000 [==============================] - 79s 10ms/sample - loss: 0.0030 - val_loss: 0.0088
Epoch 22/50
7968/8000 [============================>.] - ETA: 0s - loss: 0.0026
Epoch 00022: val_loss improved from 0.00877 to 0.00628, saving model to model.h5
8000/8000 [==============================] - 79s 10ms/sample - loss: 0.0026 - val_loss: 0.0063
Epoch 23/50
7968/8000 [============================>.] - ETA: 0s - loss: 0.0026
Epoch 00023: val_loss did not improve from 0.00628
8000/8000 [==============================] - 79s 10ms/sample - loss: 0.0026 - val_loss: 0.0090
Epoch 24/50
7968/8000 [============================>.] - ETA: 0s - loss: 0.0096
Epoch 00024: val_loss did not improve from 0.00628
8000/8000 [==============================] - 79s 10ms/sample - loss: 0.0096 - val_loss: 0.0695
Epoch 25/50
7968/8000 [============================>.] - ETA: 0s - loss: 0.0111
Epoch 00025: val_loss did not improve from 0.00628
8000/8000 [==============================] - 79s 10ms/sample - loss: 0.0111 - val_loss: 0.0543
Epoch 26/50
7968/8000 [============================>.] - ETA: 0s - loss: 0.0052
Epoch 00026: val_loss did not improve from 0.00628
8000/8000 [==============================] - 79s 10ms/sample - loss: 0.0052 - val_loss: 0.0240
Epoch 27/50
7968/8000 [============================>.] - ETA: 0s - loss: 0.0035
Epoch 00027: val_loss did not improve from 0.00628

Epoch 00027: ReduceLROnPlateau reducing learning rate to 0.00020000000949949026.
8000/8000 [==============================] - 80s 10ms/sample - loss: 0.0035 - val_loss: 0.0182
Epoch 28/50
7968/8000 [============================>.] - ETA: 0s - loss: 0.0024
Epoch 00028: val_loss did not improve from 0.00628
8000/8000 [==============================] - 79s 10ms/sample - loss: 0.0024 - val_loss: 0.0112
Epoch 29/50
7968/8000 [============================>.] - ETA: 0s - loss: 0.0021
Epoch 00029: val_loss did not improve from 0.00628
8000/8000 [==============================] - 79s 10ms/sample - loss: 0.0021 - val_loss: 0.0086
Epoch 30/50
7968/8000 [============================>.] - ETA: 0s - loss: 0.0020
Epoch 00030: val_loss did not improve from 0.00628
8000/8000 [==============================] - 79s 10ms/sample - loss: 0.0020 - val_loss: 0.0071
Epoch 31/50
7968/8000 [============================>.] - ETA: 0s - loss: 0.0019
Epoch 00031: val_loss did not improve from 0.00628
8000/8000 [==============================] - 79s 10ms/sample - loss: 0.0019 - val_loss: 0.0063
Epoch 32/50
7968/8000 [============================>.] - ETA: 0s - loss: 0.0019
Epoch 00032: val_loss improved from 0.00628 to 0.00571, saving model to model.h5
8000/8000 [==============================] - 77s 10ms/sample - loss: 0.0019 - val_loss: 0.0057
Epoch 33/50
7968/8000 [============================>.] - ETA: 0s - loss: 0.0018
Epoch 00033: val_loss improved from 0.00571 to 0.00550, saving model to model.h5
8000/8000 [==============================] - 78s 10ms/sample - loss: 0.0018 - val_loss: 0.0055
Epoch 34/50
7968/8000 [============================>.] - ETA: 0s - loss: 0.0018
Epoch 00034: val_loss improved from 0.00550 to 0.00546, saving model to model.h5
8000/8000 [==============================] - 78s 10ms/sample - loss: 0.0018 - val_loss: 0.0055
Epoch 35/50
7968/8000 [============================>.] - ETA: 0s - loss: 0.0017
Epoch 00035: val_loss improved from 0.00546 to 0.00524, saving model to model.h5
8000/8000 [==============================] - 78s 10ms/sample - loss: 0.0017 - val_loss: 0.0052
Epoch 36/50
7968/8000 [============================>.] - ETA: 0s - loss: 0.0017
Epoch 00036: val_loss improved from 0.00524 to 0.00518, saving model to model.h5
8000/8000 [==============================] - 78s 10ms/sample - loss: 0.0017 - val_loss: 0.0052
Epoch 37/50
7968/8000 [============================>.] - ETA: 0s - loss: 0.0016
Epoch 00037: val_loss did not improve from 0.00518
8000/8000 [==============================] - 77s 10ms/sample - loss: 0.0016 - val_loss: 0.0053
Epoch 38/50
7968/8000 [============================>.] - ETA: 0s - loss: 0.0016
Epoch 00038: val_loss improved from 0.00518 to 0.00509, saving model to model.h5
8000/8000 [==============================] - 77s 10ms/sample - loss: 0.0016 - val_loss: 0.0051
Epoch 39/50
7968/8000 [============================>.] - ETA: 0s - loss: 0.0016
Epoch 00039: val_loss did not improve from 0.00509
8000/8000 [==============================] - 77s 10ms/sample - loss: 0.0016 - val_loss: 0.0052
Epoch 40/50
7968/8000 [============================>.] - ETA: 0s - loss: 0.0015
Epoch 00040: val_loss did not improve from 0.00509
8000/8000 [==============================] - 77s 10ms/sample - loss: 0.0015 - val_loss: 0.0052
Epoch 41/50
7968/8000 [============================>.] - ETA: 0s - loss: 0.0015
Epoch 00041: val_loss improved from 0.00509 to 0.00499, saving model to model.h5
8000/8000 [==============================] - 78s 10ms/sample - loss: 0.0015 - val_loss: 0.0050
Epoch 42/50
7968/8000 [============================>.] - ETA: 0s - loss: 0.0015
Epoch 00042: val_loss did not improve from 0.00499
8000/8000 [==============================] - 77s 10ms/sample - loss: 0.0015 - val_loss: 0.0051
Epoch 43/50
7968/8000 [============================>.] - ETA: 0s - loss: 0.0015
Epoch 00043: val_loss did not improve from 0.00499
8000/8000 [==============================] - 77s 10ms/sample - loss: 0.0015 - val_loss: 0.0051
Epoch 44/50
7968/8000 [============================>.] - ETA: 0s - loss: 0.0015
Epoch 00044: val_loss did not improve from 0.00499
8000/8000 [==============================] - 77s 10ms/sample - loss: 0.0015 - val_loss: 0.0050
Epoch 45/50
7968/8000 [============================>.] - ETA: 0s - loss: 0.0015
Epoch 00045: val_loss did not improve from 0.00499
8000/8000 [==============================] - 77s 10ms/sample - loss: 0.0015 - val_loss: 0.0052
Epoch 46/50
7968/8000 [============================>.] - ETA: 0s - loss: 0.0015
Epoch 00046: val_loss did not improve from 0.00499

Epoch 00046: ReduceLROnPlateau reducing learning rate to 4.0000001899898055e-05.
8000/8000 [==============================] - 77s 10ms/sample - loss: 0.0015 - val_loss: 0.0057
Epoch 47/50
7968/8000 [============================>.] - ETA: 0s - loss: 0.0013
Epoch 00047: val_loss improved from 0.00499 to 0.00483, saving model to model.h5
8000/8000 [==============================] - 77s 10ms/sample - loss: 0.0013 - val_loss: 0.0048
Epoch 48/50
7968/8000 [============================>.] - ETA: 0s - loss: 0.0012
Epoch 00048: val_loss did not improve from 0.00483
8000/8000 [==============================] - 77s 10ms/sample - loss: 0.0012 - val_loss: 0.0048
Epoch 49/50
7968/8000 [============================>.] - ETA: 0s - loss: 0.0012
Epoch 00049: val_loss improved from 0.00483 to 0.00471, saving model to model.h5
8000/8000 [==============================] - 77s 10ms/sample - loss: 0.0012 - val_loss: 0.0047
Epoch 50/50
7968/8000 [============================>.] - ETA: 0s - loss: 0.0012
Epoch 00050: val_loss did not improve from 0.00471
8000/8000 [==============================] - 77s 10ms/sample - loss: 0.0012 - val_loss: 0.0048

Plot loss during training

In [13]:
_, (ax1, ax2, ax3) = plt.subplots(nrows=1, ncols=3, figsize=(12,4))
ax1.plot(hist.history['loss'], label='loss')
ax1.plot(hist.history['val_loss'], label='val_loss')
ax1.legend()

ax2.plot(hist.history['loss'], label='loss')
ax2.plot(hist.history['val_loss'], label='val_loss')
ax2.legend()
ax2.set_ylim(0, .1)

ax3.plot(hist.history['lr'], label='lr')
ax3.legend()

plt.tight_layout()
plt.show()

Show some cats from the validation set - the predictions look pretty good

In [14]:
for j in range(0, 50, 5):
    plot_images([i for i in range(j, j+5)], valid_images, valid_features)

Investigate more closely

In [64]:
idx = 73
plot_images([idx], valid_images, valid_features)